
    Towards Practical Oblivious RAM

    We take an important step forward in making Oblivious RAM (O-RAM) practical. We propose an O-RAM construction achieving an amortized overhead of 20X-35X (for an O-RAM roughly 1 terabyte in size), about 63 times faster than the best existing scheme. On the theoretical front, we propose a fundamentally novel technique for constructing Oblivious RAMs: specifically, we partition a bigger O-RAM into smaller O-RAMs, and employ a background eviction technique to obliviously evict blocks from the client-side cache into a randomly assigned server-side partition. This novel technique is the key to achieving the gains in practical performance.
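
    As a rough illustration of the partitioning idea, here is a minimal Python sketch under our own simplifying assumptions (class and field names are invented; in the actual construction each partition is itself a small O-RAM, and evictions proceed at a fixed rate with dummy evictions to stay oblivious):

```python
import random
from collections import deque

class PartitionORAM:
    """Toy sketch of partition-based O-RAM: a large O-RAM is split into
    many small partitions (modeled here as plain dicts for brevity)."""

    def __init__(self, num_partitions):
        self.partitions = [dict() for _ in range(num_partitions)]
        self.position = {}            # block id -> partition currently holding it
        self.cache = {}               # client-side cache (stash)
        self.eviction_queue = deque()

    def access(self, block_id):
        p = self.position.get(block_id)
        # Read (and remove) the block from the partition it currently lives in.
        data = self.partitions[p].pop(block_id, None) if p is not None else None
        self.cache[block_id] = data
        # Re-assign the block to a freshly sampled random partition and schedule
        # a background eviction; the random re-assignment is what hides the
        # logical access pattern from the server.
        new_p = random.randrange(len(self.partitions))
        self.position[block_id] = new_p
        self.eviction_queue.append((block_id, new_p))
        return data

    def background_evict(self):
        # Obliviously flush one queued block from the client cache back to its
        # newly assigned server-side partition.
        if self.eviction_queue:
            block_id, p = self.eviction_queue.popleft()
            if block_id in self.cache:
                self.partitions[p][block_id] = self.cache.pop(block_id)
```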

    Link Prediction by De-anonymization: How We Won the Kaggle Social Network Challenge

    This paper describes the winning entry to the IJCNN 2011 Social Network Challenge run by Kaggle.com. The goal of the contest was to promote research on real-world link prediction, and the dataset was a graph obtained by crawling the popular Flickr social photo sharing website, with user identities scrubbed. By de-anonymizing much of the competition test set using our own Flickr crawl, we were able to effectively game the competition. Our attack represents a new application of de-anonymization to gaming machine learning contests, suggesting changes in how future competitions should be run. We introduce a new simulated annealing-based weighted graph matching algorithm for the seeding step of de-anonymization. We also show how to combine de-anonymization with link prediction---the latter is required to achieve good performance on the portion of the test set not de-anonymized---for example by training the predictor on the de-anonymized portion of the test set, and combining probabilistic predictions from de-anonymization and link prediction. Comment: 11 pages, 13 figures; submitted to IJCNN'2011.
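
    The seeding step can be pictured with a small simulated-annealing sketch (ours, with invented helper names; the actual algorithm matches weighted graphs, seeds from high-degree nodes, and updates scores incrementally instead of recomputing them from scratch):

```python
import math
import random

def anneal_match(edges1, edges2, nodes1, nodes2, steps=10_000, t0=1.0):
    """Toy simulated annealing for graph matching: search for a mapping
    nodes1 -> nodes2 that preserves as many edges as possible. edges1 and
    edges2 are sets of frozenset({u, v}) pairs."""
    mapping = dict(zip(nodes1, random.sample(list(nodes2), len(nodes1))))

    def score(m):
        # Number of edges of graph 1 whose images are also edges of graph 2.
        return sum(1 for e in edges1 if frozenset(m[v] for v in e) in edges2)

    cur = score(mapping)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9                 # linear cooling schedule
        a, b = random.sample(list(nodes1), 2)
        mapping[a], mapping[b] = mapping[b], mapping[a]    # propose a swap
        new = score(mapping)
        if new >= cur or random.random() < math.exp((new - cur) / t):
            cur = new                                      # accept (possibly uphill) move
        else:
            mapping[a], mapping[b] = mapping[b], mapping[a]  # revert the swap
    return mapping, cur
```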

    Hybrid Consensus: Efficient Consensus in the Permissionless Model

    Consensus, or state machine replication, is a foundational building block of distributed systems and modern cryptography. Consensus in the classical, "permissioned" setting has been extensively studied in the 30 years of distributed systems literature. Recent developments in Bitcoin and other decentralized cryptocurrencies popularized a new form of consensus in a "permissionless" setting, where anyone can join and leave dynamically, and there is no a-priori knowledge of the number of consensus nodes. So far, however, all known permissionless consensus protocols assume network synchrony, i.e., the protocol must know an upper bound on the network's delay, and transactions confirm slower than this a-priori upper bound. We initiate the study of the feasibility and infeasibility of achieving responsiveness in permissionless consensus. In a responsive protocol, the transaction confirmation time depends only on the actual network delay, and not on any a-priori known upper bound such as a synchronous round. Classical protocols in the partially synchronous and asynchronous models naturally achieve responsiveness, since the protocol does not even know any delay upper bound. Unfortunately, we show that in the permissionless setting, consensus is impossible in the asynchronous or partially synchronous models. On the positive side, we construct a protocol called Hybrid Consensus by combining classical-style and blockchain-style consensus. Hybrid Consensus shows that responsiveness is nonetheless possible to achieve in permissionless consensus (assuming proof-of-work) when 1) the protocol knows an upper bound on the network delay; 2) we allow a non-responsive warmup period after which transaction confirmation can become responsive; 3) honesty has some stickiness, i.e., it takes a short while for an adversary to corrupt a node or put it to sleep; and 4) less than 1/3 of the nodes are corrupt. We show that all these conditions are in fact necessary: if even one of them is violated, responsiveness is impossible. Our work takes a step forward in our understanding of the permissionless model and its differences from and relations to classical consensus.
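
    The flavor of the construction can be sketched as follows (a toy Python illustration of ours; the constant and helper names are assumptions, and the actual protocol additionally handles the warmup period, corruption stickiness, and committee handoff): the proof-of-work chain elects a rotating committee from the miners of sufficiently buried blocks, and that committee then confirms transactions responsively via classical quorum voting.

```python
STABILITY_DEPTH = 6  # assumed depth after which a block counts as "buried"

def committee_from_chain(chain, committee_size):
    """Toy committee election: the miners of the last `committee_size`
    stable (deeply buried) proof-of-work blocks form the BFT committee.
    `chain` is a list of (miner_id, payload) blocks."""
    stable = chain[:max(0, len(chain) - STABILITY_DEPTH)]
    return [miner for miner, _ in stable[-committee_size:]]

def bft_confirm(committee, votes):
    # Responsive confirmation: a transaction is final as soon as more than
    # 2/3 of the committee has voted for it, i.e., confirmation time tracks
    # the actual network delay rather than a pessimistic round length.
    quorum = (2 * len(committee)) // 3 + 1
    return len({v for v in votes if v in committee}) >= quorum
```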

    Differentially Oblivious Turing Machines

    Oblivious RAM (ORAM) is a machinery that prevents any RAM computation from leaking information about its secret inputs through its observable access pattern. It is known that every ORAM must incur a logarithmic overhead compared to the non-oblivious RAM. In fact, even the seemingly weaker notion of differential obliviousness, which intuitively "protects" a single access by guaranteeing that the observed access patterns for every two "neighboring" logical access sequences satisfy (ε, δ)-differential privacy, is subject to a logarithmic lower bound. In this work, we show that any Turing machine computation can be generically compiled into a differentially oblivious one with only doubly logarithmic overhead. More precisely, given a Turing machine that makes N transitions, the compiled Turing machine makes O(N · log log N) transitions in total and the physical head-movement sequence satisfies (ε, δ)-differential privacy (for a constant ε and a negligible δ). We additionally show that Ω(log log N) overhead is necessary in a natural range of parameters (and in the balls-and-bins model). As a corollary, we show that there exist natural data structures, such as stacks and queues (supporting online operations), on N elements for which there is a differentially oblivious implementation on a Turing machine incurring amortized O(log log N) overhead per operation, while it is known that any oblivious implementation must consume Ω(log N) operations unconditionally, even on a RAM. Therefore, we obtain the first unconditional separation between obliviousness and differential obliviousness in the most natural setting of parameters, where ε is a constant and δ is negligible. Before this work, such a separation was only known in the balls-and-bins model. Note that the lower bound applies in the RAM model while our upper bound is in the Turing machine model, making our separation stronger.
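
    For reference, the (ε, δ)-differential privacy condition on the observable head movements can be stated as follows (phrasing adapted from the standard differential privacy definition, not quoted from the paper):

```latex
% For any two neighboring logical input sequences I and I' (differing in a
% single operation) and every set S of observable head-movement traces:
\Pr\bigl[\mathsf{Moves}(I) \in S\bigr]
  \;\le\; e^{\varepsilon} \cdot \Pr\bigl[\mathsf{Moves}(I') \in S\bigr] + \delta
```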

    Foundations of Transaction Fee Mechanism Design

    In blockchains such as Bitcoin and Ethereum, users compete in a transaction fee auction to get their transactions confirmed in the next block. A line of recent works set forth the desiderata for a "dream" transaction fee mechanism (TFM), and explored whether such a mechanism exists. A dream TFM should satisfy 1) user incentive compatibility (UIC), i.e., truthful bidding should be a user's dominant strategy; 2) miner incentive compatibility (MIC), i.e., the miner's dominant strategy is to faithfully implement the prescribed mechanism; and 3) miner-user side contract proofness (SCP), i.e., no coalition of the miner and one or more users can increase their joint utility by deviating from the honest behavior. The weakest form of SCP is called 1-SCP, where we only aim to provide resilience against the collusion of the miner and a single user. Sadly, despite various attempts, to the best of our knowledge, no existing mechanism satisfies all three properties in all situations. Since the TFM departs from classical mechanism design in its modeling and assumptions, our understanding of the design space remains limited to date. In this paper, we further unravel the mathematical structure of transaction fee mechanism design by addressing three questions: 1) Can we have a dream TFM? 2) Should the incentive compatibility notions be rethought? 3) Do the new design elements make a difference?
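
    To make the UIC notion concrete, here is a small brute-force check of ours (not from the paper) written in Python: for a toy single-block mechanism, it tests whether bidding one's true value is always a best response, using a simple posted-price mechanism as the example.

```python
from itertools import product

def uic_holds(mechanism, values, bid_grid):
    """Brute-force UIC check for a toy TFM. `mechanism(bids)` returns a
    dict mapping each user index to (confirmed, payment); we verify that
    no unilateral deviation from truthful bidding improves utility."""
    n = len(values)
    for i, v in enumerate(values):
        for others in product(bid_grid, repeat=n - 1):
            def utility(my_bid):
                bids = list(others[:i]) + [my_bid] + list(others[i:])
                confirmed, payment = mechanism(bids)[i]
                return (v - payment) if confirmed else 0.0
            if any(utility(b) > utility(v) + 1e-9 for b in bid_grid):
                return False
    return True

def posted_price(r):
    # Posted-price mechanism: any bid of at least r is confirmed and pays r.
    def mech(bids):
        return {i: (b >= r, r if b >= r else 0.0) for i, b in enumerate(bids)}
    return mech

assert uic_holds(posted_price(2.0), values=[1.0, 2.5, 3.0], bid_grid=[0, 1, 2, 3])
```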

    MPC for MPC: Secure Computation on a Massively Parallel Computing Architecture

    Massively Parallel Computation (MPC) is a model of computation widely believed to best capture realistic parallel computing architectures such as large-scale MapReduce and Hadoop clusters. Motivated by the fact that many data analytics tasks performed on these platforms involve sensitive user data, we initiate the theoretical exploration of how to leverage MPC architectures to enable efficient, privacy-preserving computation over massive data. Clearly, if a computation task does not lend itself to an efficient implementation on MPC even without security, then we cannot hope to compute it efficiently on MPC with security. We show, on the other hand, that any task that can be efficiently computed on MPC can also be securely computed with comparable efficiency. Specifically, we show the following results: - any MPC algorithm can be compiled to a communication-oblivious counterpart while asymptotically preserving its round and space complexity, where communication-obliviousness ensures that any network intermediary observing the communication patterns learns no information about the secret inputs; - assuming the existence of Fully Homomorphic Encryption with a suitable notion of compactness and other standard cryptographic assumptions, any MPC algorithm can be compiled to a secure counterpart that defends against an adversary who controls not only intermediate network routers but additionally up to a 1/3 - ε fraction of machines (for an arbitrarily small constant ε); moreover, this compilation preserves the round complexity tightly, and preserves the space complexity up to a multiplicative blowup related to the security parameter. As an initial exploration of this important direction, our work suggests new definitions and proposes novel protocols that blend algorithmic and cryptographic techniques.
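
    The communication-obliviousness in the first result can be pictured with a short sketch of ours (the actual compiler relies on oblivious routing to preserve space complexity; the naive all-to-all padding below only illustrates why a fixed traffic shape leaks nothing):

```python
PAD = 1024  # assumed fixed message size in bytes

def oblivious_round(machines, real_messages):
    """Toy communication-oblivious round: every machine sends exactly one
    fixed-size message to every other machine, padding with dummies, so the
    traffic pattern is identical for all inputs. `real_messages[(src, dst)]`
    holds the genuine payload, if any."""
    traffic = {}
    for src in machines:
        for dst in machines:
            if src == dst:
                continue
            payload = real_messages.get((src, dst), b"")     # dummy if absent
            traffic[(src, dst)] = payload.ljust(PAD, b"\0")  # pad to fixed size
    # |machines| * (|machines| - 1) equal-size messages, regardless of inputs.
    return traffic
```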

    What Can Cryptography Do for Decentralized Mechanism Design?

    Recent works of Roughgarden (EC'21) and Chung and Shi (SODA'23) initiate the study of a new decentralized mechanism design problem called transaction fee mechanism design (TFM). Unlike in the classical mechanism design literature, in the decentralized environment even the auctioneer (i.e., the miner) can be a strategic player, and it can even collude with a subset of the users, facilitated by binding side contracts. Chung and Shi showed two main impossibility results that rule out the existence of a dream TFM. First, any TFM that provides incentive compatibility for individual users and miner-user coalitions must always have zero miner revenue, no matter whether the block size is finite or infinite. Second, assuming finite block size, no non-trivial TFM can simultaneously provide incentive compatibility for any individual user, and for any miner-user coalition. In this work, we explore what new models and meaningful relaxations can allow us to circumvent the impossibility results of Chung and Shi. Besides today's model that does not employ cryptography, we introduce a new MPC-assisted model where the TFM is implemented by a joint multi-party computation (MPC) protocol among the miners. We prove several feasibility and infeasibility results for achieving strict and approximate incentive compatibility, respectively, in the plain model as well as the MPC-assisted model. We show that while cryptography is not a panacea, it indeed allows us to overcome some impossibility results pertaining to the plain model, leading to non-trivial mechanisms with useful guarantees that are otherwise impossible in the plain model. Our work is also the first to characterize the mathematical landscape of transaction fee mechanism design under approximate incentive compatibility, as well as in a cryptography-assisted model.
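
    A rough way to picture the gap between the two models (function names are ours; the actual models are defined game-theoretically, not by code):

```python
def plain_model_outcome(miner_strategy, all_bids, mechanism):
    # Plain model: the strategic miner first decides which bids to include
    # (and may inject fake bids from colluding users) before the mechanism
    # runs on the block it assembled.
    chosen_bids = miner_strategy(all_bids)
    return mechanism(chosen_bids)

def mpc_assisted_outcome(all_bids, mechanism):
    # MPC-assisted model: the miners jointly emulate the mechanism inside an
    # MPC, so inclusion and allocation are computed on the full bid vector
    # and no single miner can see or filter bids beforehand.
    return mechanism(all_bids)
```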

    Analysis of Deterministic Longest-Chain Protocols

    Most classical consensus protocols rely on a leader to coordinate nodes' voting efforts. One novel idea that stems from blockchain-style consensus is to rely, instead, on a "longest-chain" idea for such coordination. Such a longest-chain idea was initially considered in randomized protocols, where in each round, a node has some probability of being elected a leader who can propose the next block. Recently, well-known systems have started implementing the deterministic counterpart of such longest-chain protocols; the deterministic counterpart is especially attractive since it is even simpler to implement than its randomized cousins. A notable instantiation is the Aura protocol, which is shipped with Parity's open-source Ethereum implementation. Interestingly, mathematical analyses of deterministic longest-chain protocols are lacking, even though there exist several analyses of randomized versions. In this paper, we provide the first formal analysis of deterministic, longest-chain-style consensus. We show that a variant of the Aura protocol can defend against a Byzantine adversary that controls fewer than a 1/3 fraction of the nodes, and this resilience parameter is tight under a suitable technical interpretation. Based on insights gained through our mathematical treatment, we point out that Aura's concrete instantiation actually fails to achieve the resilience level it claims. Finally, while our tight proof for the longest-chain protocol is rather involved and non-trivial, we show that a variant of the "longest-chain" idea, which we call "largest-set", enables a textbook construction that admits a simple proof (albeit with slower confirmation).
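
    A minimal sketch of the deterministic longest-chain flavor (our illustration in Python; the rotation, chain-selection, and confirmation rules of the paper's Aura variant differ in details):

```python
def proposer(round_no, validators):
    # Aura-style deterministic leader rotation: the proposer of each round
    # is fixed in advance by round-robin over the validator list.
    return validators[round_no % len(validators)]

def choose_chain(chains):
    # Longest-chain rule: honest nodes adopt and extend the longest valid
    # chain seen so far; chains are lists of block-id strings, and ties are
    # broken deterministically by comparison.
    return max(chains, key=lambda c: (len(c), c))

def confirmed_prefix(chain, n_validators):
    # Assumed confirmation heuristic: treat a block as final once it is
    # buried under more than 2n/3 later blocks.
    depth = (2 * n_validators) // 3 + 1
    return chain[:max(0, len(chain) - depth)]
```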